Current Issue: April - June | Volume: 2020 | Issue Number: 2 | Articles: 5
Artificial heart valves, used to replace diseased human heart valves, are life-saving medical devices. Currently, at the device development stage, new artificial valves are primarily assessed through time-consuming and expensive benchtop tests or animal implantation studies. Computational stress analysis using the finite element (FE) method presents an attractive alternative to physical testing. However, FE computational analysis requires a complex process of numerical modeling and simulation, as well as in-depth engineering expertise. In this proof-of-concept study, our objective was to develop machine learning (ML) techniques that can estimate the stress and deformation of a transcatheter aortic valve (TAV) from a given set of TAV leaflet design parameters. Two types of deep neural network were developed and compared: autoencoder-based ML models and direct ML models. The ML models were evaluated through Monte Carlo cross-validation. The results show that both proposed deep neural networks could accurately estimate the deformed geometry of the TAV leaflets and the associated stress distributions within a second, with the direct ML models (ML-model-d) having slightly larger errors. In conclusion, although this is a proof-of-concept study, the proposed ML approaches have demonstrated great potential to serve as a fast and reliable tool for future TAV design.
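A minimal sketch of the "direct" surrogate idea described above, assuming a plain feed-forward regressor (scikit-learn's MLPRegressor here) that maps a vector of leaflet design parameters to a flattened field of nodal stresses. The parameter count, mesh size, and synthetic data are illustrative assumptions, not the authors' implementation:

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    n_designs, n_params, n_nodes = 500, 6, 1200                 # hypothetical sizes
    X = rng.uniform(0.0, 1.0, size=(n_designs, n_params))       # leaflet design parameters
    Y = rng.normal(0.0, 1.0, size=(n_designs, n_nodes))         # stand-in for FE nodal stresses

    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

    surrogate = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=2000, random_state=0)
    surrogate.fit(X_tr, Y_tr)

    # Once trained, a stress field for a new design is predicted in milliseconds,
    # which is what makes the approach attractive compared with a full FE solve.
    predicted_stress = surrogate.predict(X_te[:1])
    print(predicted_stress.shape)                                # (1, n_nodes)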
Backpropagation neural network algorithms are among the most widely used neural network algorithms. They use the output-layer error to estimate the error of the layer immediately preceding the output layer, so that the error of each layer can be obtained through layer-by-layer backpropagation. The purpose of this paper is to simulate the decryption process of DES with a backpropagation algorithm. By inputting a large number of plaintext and ciphertext pairs, a neural network simulator for the decryption of the target cipher is constructed, and the given ciphertext is decrypted. This paper describes in detail how to modify the backpropagation neural network classifier and apply it to building a regression analysis model. The experimental results show that the neural network model built in this paper restores plaintext well, with a fitting rate higher than 90% compared with the true plaintext.
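A minimal sketch of the setup described above, assuming ciphertext and plaintext blocks are represented as 64-bit vectors and a backpropagation network (scikit-learn's MLPRegressor here) is trained to map one to the other. The random toy data merely stands in for genuine plaintext/ciphertext pairs produced under a fixed key:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    n_pairs, block_bits = 5000, 64                    # DES operates on 64-bit blocks

    ciphertext = rng.integers(0, 2, size=(n_pairs, block_bits)).astype(float)
    plaintext  = rng.integers(0, 2, size=(n_pairs, block_bits)).astype(float)  # placeholder targets

    net = MLPRegressor(hidden_layer_sizes=(256, 256), activation="relu",
                       max_iter=200, random_state=1)
    net.fit(ciphertext[:-1000], plaintext[:-1000])

    # "Fitting rate": fraction of recovered bits that match the true plaintext.
    recovered = (net.predict(ciphertext[-1000:]) > 0.5).astype(float)
    fitting_rate = (recovered == plaintext[-1000:]).mean()
    print(f"bitwise fitting rate: {fitting_rate:.2%}")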
This paper aimed to establish a nonlinear relationship between laser cladding process parameters and the crack density of a high-hardness, nickel-based laser cladding layer, and to control the cracking of the cladding layer via an intelligent algorithm. Using three main process parameters (overlap rate, powder feed rate, and scanning speed), an orthogonal experiment was designed, and the experimental results were used as training and testing datasets for a neural network. A neural network prediction model relating the laser cladding process parameters to coating crack density was established, and a genetic algorithm was used to optimize the prediction results. To improve prediction accuracy, genetic algorithms were used to optimize the weights and thresholds of the neural networks, and the performance of the neural network was tested. The results show that the order of influence on coating crack sensitivity was: overlap rate > powder feed rate > scanning speed. For the three test groups, the relative error between the values predicted by the genetic-algorithm-optimized neural network model and the experimental values was less than 9.8%. The genetic algorithm optimized the predicted results, and the process parameters that resulted in the smallest crack density were: powder feed rate of 15.0726 g/min, overlap rate of 49.797%, scanning speed of 5.9275 mm/s, and crack density of 0.001272 mm/mm². Crack generation was therefore controlled through the combined optimization of the neural network and genetic algorithm.
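A minimal sketch of the parameter-search half of the workflow, assuming a surrogate network already maps (overlap rate, powder feed rate, scanning speed) to crack density and a simple genetic algorithm then searches for the lowest predicted crack density. The parameter bounds, population settings, and synthetic training data are illustrative assumptions, not the paper's values:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(2)
    bounds = np.array([[30.0, 60.0],     # overlap rate (%)
                       [10.0, 20.0],     # powder feed rate (g/min)
                       [4.0, 8.0]])      # scanning speed (mm/s)

    # Stand-in surrogate; in the paper this is trained on the orthogonal experiment.
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(200, 3))
    y = rng.uniform(0.001, 0.05, size=200)          # synthetic crack densities (mm/mm^2)
    surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=2).fit(X, y)

    def evolve(pop_size=50, generations=100, mutation=0.1):
        pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, 3))
        for _ in range(generations):
            fitness = surrogate.predict(pop)                  # lower crack density = fitter
            parents = pop[np.argsort(fitness)[: pop_size // 2]]      # truncation selection
            children = parents[rng.integers(0, len(parents), pop_size - len(parents))].copy()
            children += rng.normal(0.0, mutation, children.shape) * (bounds[:, 1] - bounds[:, 0])
            pop = np.clip(np.vstack([parents, children]), bounds[:, 0], bounds[:, 1])
        best = pop[np.argmin(surrogate.predict(pop))]
        return best, surrogate.predict(best[None])[0]

    best_params, best_density = evolve()
    print("best (overlap %, feed g/min, speed mm/s):", best_params, "->", best_density)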
To overcome the huge resource consumption of neural network training, MLaaS (Machine Learning as a Service) has become an irresistible trend, just as SaaS (Software as a Service), PaaS (Platform as a Service), and IaaS (Infrastructure as a Service) have. However, it comes with security issues arising from untrustworthy third-party services. In particular, machine learning providers may deploy trojan backdoors in the models they provide in pursuit of extra profit or other illegal purposes. Against the redundant-nodes-based trojaning attack on neural networks, we propose a novel detection method that requires only the untrusted model to be tested and a small batch of legitimate data. By comparing different neural network training processes, we found that the embedding of malicious nodes makes their parameter configuration abnormal. Moreover, by analysing the cost distribution of the test dataset over the network nodes, we successfully detect the trojaned nodes in the neural network. As far as we know, research on defences against trojaning attacks on neural networks is still in its infancy, and our research may shed light on the security of MLaaS in real-life scenarios.
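A minimal sketch of the per-node inspection idea, assuming a two-layer ReLU network from an untrusted provider and a small batch of legitimate inputs: nodes that almost never fire on clean data, or whose weight vectors are statistical outliers, are flagged as candidate trojaned nodes. The thresholds and network shape are illustrative, not the paper's exact procedure:

    import numpy as np

    rng = np.random.default_rng(3)

    # Suspect model: one hidden ReLU layer (weights would come from the untrusted provider).
    W1 = rng.normal(0.0, 1.0, size=(128, 784))
    b1 = rng.normal(0.0, 0.1, size=128)

    legit_batch = rng.uniform(0.0, 1.0, size=(256, 784))    # small batch of legitimate data
    hidden = np.maximum(0.0, legit_batch @ W1.T + b1)        # hidden activations

    # Statistic 1: how often each node activates on legitimate data.
    firing_rate = (hidden > 0).mean(axis=0)

    # Statistic 2: how unusual each node's weight configuration is (L2-norm z-score).
    norms = np.linalg.norm(W1, axis=1)
    norm_z = np.abs(norms - norms.mean()) / norms.std()

    suspects = np.where((firing_rate < 0.01) | (norm_z > 3.0))[0]
    print("candidate trojaned nodes:", suspects)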
The analysis of frame sequences in talk show videos, which is necessary for media mining and television production, requires significant manual effort and is a very time-consuming process. Given the vast amount of unlabeled face frames from talk show videos, we address and propose a solution to the problem of recognizing and clustering faces. In this paper, we propose a TV media mining system based on a deep convolutional neural network trained with a triplet loss minimization method. The main function of the proposed system is the indexing and clustering of video data, to achieve effective media production analysis of individuals in talk show videos and to rapidly identify a specific individual in video data with real-time processing. Our system uses several face datasets from Labeled Faces in the Wild (LFW), which is a collection of unlabeled web face images, as well as the YouTube Faces and talk show faces datasets. In the recognition (person spotting) task, our system achieves an F-measure of 0.996 on the collection of unlabeled web face images and an F-measure of 0.972 on the talk show faces dataset. In the clustering task, our system achieves F-measures of 0.764 and 0.935 on the YouTube Faces database and the LFW dataset, respectively, and an F-measure of 0.832 on the talk show faces dataset, an improvement of 5.4%, 6.5%, and 8.2% over previous methods.
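A minimal sketch of the triplet loss the system is trained with: embeddings of an anchor face and a positive example (same person) are pulled together while a negative example (different person) is pushed at least a margin further away. The embedding dimension, margin, and random embeddings below are illustrative assumptions:

    import numpy as np

    def triplet_loss(anchor, positive, negative, margin=0.2):
        # Mean hinge loss over a batch of L2-normalised face embeddings.
        pos_dist = np.sum((anchor - positive) ** 2, axis=1)   # squared distance to same person
        neg_dist = np.sum((anchor - negative) ** 2, axis=1)   # squared distance to different person
        return np.maximum(0.0, pos_dist - neg_dist + margin).mean()

    def normalise(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    rng = np.random.default_rng(4)
    anchor, positive, negative = (normalise(rng.normal(size=(32, 128))) for _ in range(3))
    print(f"batch triplet loss: {triplet_loss(anchor, positive, negative):.3f}")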